Self-tracking system and its operation method
Patent abstract:
Summary "Self-locating system and its operating method" The present invention relates to an autonomous system for locating an individual, comprising: a wheeled vehicle; a 3d depth camera, distance and / or touch sensors; at least two engines, each for moving one or more wheels on both left and right sides of the above vehicle; a data processing module, comprising a decision unit configured for: when the depth camera 3d detects that the distance between the system and the subject to be followed is less than a predetermined limit, slowing down both engines; When the 3d depth camera detects that the person to be followed exceeds a predetermined limit to the left or right of a predefined horizontal point, increase the right engine speed by decreasing the left engine speed or vice versa. . 公开号:BR112015006203A2 申请号:R112015006203 申请日:2013-09-19 公开日:2019-11-26 发明作者:Raquel Correia Figueira Ana;José Ferro Martins David;Rodrigo Fragoso Mendes Gardede Correia Luis;Carlos Inácio De Matos Luís;Carlos Gonçalves Adaixo Michael;José Guerra De Araújo Pedro;Manuel Dos Santos Nascimento Pedro;Samuel Cruz Ponciano Ricardo;Emanuel Neves Goulão Vitor 申请人:Follow Inspiration Unipessoal Lda; IPC主号:
Patent description:
SELF-TRACKING SYSTEM AND ITS OPERATION METHOD

Technical field

[0001] The present invention refers to a fully autonomous robotic system and the respective operation methods, consisting of motors, batteries, control panels, processing units, sensors and 3D depth cameras. This system is capable of following a specific person or moving independently in space.

[0002] It is therefore useful for transporting objects in different everyday situations. It can be used by anyone, including people with reduced mobility. This system can be applied in distribution domains or in any other domain that benefits from object-carrying technologies using image recognition and artificial intelligence.

Background of the invention

[0003] According to the International Labor Organization, there are more than 600 million people with disabilities in the world [1]. Approximately 100 to 140 million suffer from physical disabilities, and forecasts point to an increase of approximately 29,000 people per year [2]. Considering only EU countries, the number of elderly and/or disabled people is between 60 and 80 million [3]. These numbers are growing, and it is estimated that by the year 2020, 25% of the population will present some type of disability. In the U.S., approximately 54 million people are disabled, 61% of whom are still of working age, and this number is estimated to double in just 15 years.

[0004] Physical disability, and particularly the problem of mobility, prevails in approximately 40% of the population with disabilities in the European Union [4] and in 7% of the North American population [5]. Recent data show an increase of 22% (2.7 M to 3.3 M) from 2008 to 2009 in the number of people in wheelchairs [6].

[0005] The circumstances that cause this condition do not always result from aging. Although a large part of these situations are due to strokes and arthritis, typically associated with aging, other situations such as multiple sclerosis, the lack or loss of lower limbs, paraplegia and orthopedic problems, not associated with aging, also contribute significantly to these numbers.

[0006] Regarding the elderly [7], it is expected that in 2050 the number of elderly people will exceed the number of young people for the first time in history. There are currently approximately 800 million elderly people; in 2050 this number will exceed 2 billion people.

[0007] Analyzing the market and our national customers, considering only the two largest commercial groups, it can be seen that the Sonae group [8] has approximately 1000 stores, while the Jerónimo Martins group [9] has approximately 350 stores. These figures show the potential of our market in terms of purchases, considering only two commercial groups. We should not forget that the solution can also be adopted by industry and other commercial groups. Looking at the international market, in the U.S. [10] there are approximately 105,000 shopping centers. In Europe [11], data from 2007 point to the existence of approximately 5,700 traditional shopping centers. Airports are another scenario in which the device, described as wi-GO in this document, can be applied. According to available data, there are approximately 15,000 airports in the United States of America alone. In the European Union [12], 800 million passengers were registered in 2010 alone.

[0008] These are just a few numbers that help us to understand the importance of developing solutions that consider what the future of societies will be.
Many of the difficulties faced by these people are related to mobility problems. In this sense, we believe that we can contribute to improving quality of life, considering that the solutions and products developed place technology at the service of social integration, cohesion and responsibility.

[0009] References:
1. Facts on Disability in the World of Work, International Labor Organization (ILO)
2. Estimated data from the European Disability Forum: About Disability, and National Center for Health Statistics
3. World Health Organization (WHO)
4. European Disability Forum: About Disability, http://www.edf-feph.org/Page_Generale.asp?DocID=12534
5. National Center for Health Statistics
6. U.S. Census Bureau, 2010 (www.census.gov)
7. Population and Aging: Facts and Figures, http://www.unric.org/html/Portuguese/ecosoc/ageing/Idosos-Factos.pdf
8. Sonae's activity in 2010, http://www.sonae.pt/fotos/editor2/0110316pr_resultadosfy2010_vf_1300982276.pdf
9. Information Jerónimo Martins, http://www.jeronimomartins.pt/pt/_food distribution/portugal_pd.html
10. http://www.icsc.org/srch/faq_category.php?Cat_type=research&cat_id=3
11. http://www.icsc.org/srch/faq_category.php?Cat_type=research&cat_id=3
12. http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/11/857&format=HTML&aged=0&language=PT&guiLanguage=en

[0010] Document WO2007034434A2 describes a device and a computer program to recognize the movement of an object or person by capturing conventional video images (e.g. RGB). The video images are processed using block matching: a block of pixels in one image is compared with the pixels in a search region of the subsequent image, and the location of the match in the subsequent image is adjusted automatically based on the measured values.

[0011] Document US2007045014A1 describes a conventional scooter controlled by a joystick, used to transport people and to circulate on the street. It has no shopping basket, does not follow people, has no cameras, and its drive and steering system has an axle that connects the wheels and is used to steer.

[0012] Document WO2004039612A2 describes a system/platform similar to a caterpillar-track type base that allows wheelchairs, for example, to be attached to it in order to cross difficult terrain that they would not be able to cross by themselves (such as stairs, for example). It does not use cameras or follow people.

Summary of the invention

[0013] The present invention relates to an autonomous system designed to follow a person, comprising:
1. a wheeled vehicle;
2. a 3D depth camera;
3. distance and/or touch sensors;
4. at least two motors, each driving one or more wheels respectively on each of the two sides, left and right, of the vehicle; and
5. a data processing module, comprising a decision unit configured so that: based on the person's location in space, the system calculates an error.
Based on this error, the system changes the speed of each motor so that it can move to an optimal position behind the user; when the person's location deviates to the right or to the left, different speeds are assigned to the motors in order to generate a curved path, allowing the system to turn; when the person's location has a forward or backward deviation, the same speed values are assigned to the motors so that the system follows a linear path, following the user in a straight line forwards and backwards.

[0014] According to a preferred embodiment, the system is additionally configured to stop the motors when it detects the proximity and/or the touch of an object by means of the aforementioned distance and/or touch sensors.

[0015] According to a preferred embodiment, the system is additionally configured to stop both motors when the 3D depth camera detects that the distance between the system and the position of the individual to be followed is less than a predetermined safety limit.

[0016] According to a preferred embodiment, the system is additionally configured to, before following the individual, recognize the individual to be followed by facial recognition, predefined gesture recognition, RFID tag or bar code.

[0017] According to a preferred embodiment, the system is additionally configured to, before following the individual: detect the presence of a human shape with the 3D depth camera; recognize the individual to be followed by a predefined recognition gesture, facial features or position; assign a unique identification of the recognized individual to the detected human shape; and activate the tracking of the recognized individual.

Brief description of the figures

[0018] The figures that follow present preferred embodiments to illustrate the description and are not to be considered as limiting the scope of the present invention.

Figure 1a: Schematic side view of an embodiment, where: (A) represents the tracking device attached to a shopping cart; (1) represents motors and batteries; (2) represents a 3D depth camera; (3) represents obstacle sensors (distance and/or touch).

Figure 1b: Schematic representation of an embodiment, where: (A) represents the tracking device attached to a shopping cart; (B) represents a wheelchair, in which the user moves; (1) represents motors and batteries; (2) represents a 3D depth camera; (3) represents obstacle sensors (distance and/or touch); and (4) represents the control and power supply elements.

Figure 1c: Schematic representation of an embodiment, in which are represented: (1) - 3D depth camera; (2) - H bridge, left side motor; (3) - H bridge, right side motor; (4) - movement and direction indicator; (5) - serial communication; (6) - RS-232; (7) - USB; (8) - distance sensors; (9) - sensor control unit; (10) - motor control unit; (11) - main control unit.

Figure 2: Schematic representation of the system's operational method.

Figure 3: Schematic representation of the user detection and recovery method.

Figure 4: Schematic representation of the active tracking method.

Figure 5: Schematic representation of the obstacle detection and avoidance method.
Figure 6: Schematic representation of the system's operational method, in which are represented: (1) - obtaining a skeleton image from the 3D camera; (2) - user detection and recovery algorithm; (3) - active tracking algorithm; (4) - motor control unit communication.

Figure 7: Schematic representation of the user detection and recovery method, in which are represented: (1) - get skeleton image; (2) - detect new users; (3) - user is lost; (4) - update user skeleton information; (5) - absence of skeleton to track; (6) - new skeleton to track.

Figure 8: Schematic representation of the active tracking method, in which are represented: (1) - skeleton position data; (2) - calculation of the lateral error with PID for the lateral position; (3) - calculation of the distance error with PID for the distance position; (4) - motor speed calculation from the distance error; (5) - motor speed calculation from the lateral error; (6) - obstacle detection and avoidance algorithm; (7) - calculation of the final speed.

Figure 9: Schematic representation of the obstacle detection and avoidance method, in which are represented: (1) - get sensor readings; (2) - update sensor data information (distance, name and position of the sensor); (3) - check the distances and locations of the sensors against the parameterized safety ranges; (4) - obstacle detected on one side of the wi-GO; (5) - obstacle detected on several sensors oriented in the same direction; (6) - create a speed modifier to decrease the speed of the two motors; (7) - create a speed modifier based on the location read by the sensor for each motor independently.

Figure 10: Schematic representation of the visualization parameters of the system, in which are represented: (1) - ideal position (set point); (2) - point of origin (wi-GO); (3) - maximum limit of the X axis; (4) - minimum limit of the X axis; (5) - Z axis / ideal X position; (6) - maximum Z axis limit; (7) - optimal Z position; (8) - minimum Z limit; (9) - X axis.

Figure 11: Schematic representation of the turning method, in which are represented: (1) - user skeleton position (z, x); (2) - wi-GO (0, 0); (3) - right wheel; (4) - left wheel; (5) - reference Z axis; (6) - Alpha, angle between (5) and (1); (7) - Alpha, same angle as (6); (8) - desired position of the right wheel; (9) - desired position of the left wheel; (10) - distance between wheels; (11) - turning radius; (12) - instantaneous curvature center.

Detailed description of the invention

[0019] The system works as a standalone device designed to follow people, disabled or not, thus allowing objects to be transported without difficulty, making their lives easier and providing more comfort and safety to those who already face several obstacles daily.

[0020] Among the main characteristics of the product, the following are highlighted: assistance in the transport of objects by people with reduced mobility and anyone else; unequivocal and aligned tracking of a specific person along a trajectory; detection of obstacles and hazards and their avoidance; image recognition based on 3D depth cameras, avoiding the need for a separate device to communicate with the system.

[0021] From an aesthetic point of view, the present invention can be presented with the same dimensions as the conventional shopping carts that can be found in supermarkets. In addition, keeping in mind the respective mass production, as well as the purpose of the respective use, it is possible to redesign the product, meeting the customer's needs and increasing the performance of the cart itself, e.g.
luggage trolleys, baby strollers, an additional wheelchair, an offline shopping cart, the transport of any type of goods, among others.

[0022] As is evident in figure 1, the system comprises one or more of: motors, batteries, 3D depth cameras, distance sensors, control panels (e.g. sensor control panel, motor control panel and general control panel) and processing units (e.g. tablet, PC or PC components), which can be applied to any physical system.

[0023] This combination of components and technologies, associated with a suitable operational method, is the object of our invention and introduces a new concept in the field of tracking and transport of objects.

[0024] The illustration shows how the components that make up our system communicate and are connected according to a preferred embodiment.

[0025] Description of each component:
- Motors - component that provides motive power to the system and allows it to move.
- Battery - component that provides energy autonomy to the system.
- 3D depth cameras - component with the ability to view the surroundings, allowing the system to properly recognize people, human faces, human voices, gestures and surrounding objects in a 3D environment.
- Distance sensors - components that measure the distances between the system and the objects in the environment in which it operates. With this component the system can detect and prevent collisions.
- Control panels - this component receives and sends data to the various components and distributes energy from the battery. The control panels include all panels in the system.
- Processing unit - this unit processes all the information from all components, as well as running all the processing software created.

Once the components are combined, we obtain a system capable of following a person, which can be applied to:
- Shopping centers - to transport purchases made by customers or restocking to be done by employees.
- Airports - for the transportation of luggage, either by customers or employees.
- Home - transport of objects in the residence.
- Hospitals - to transport medication, meals, clothing, records or other necessary objects between the various departments.
- Industry - to transport objects between sections.
- Or any other scenario that needs to transport products or that needs an autonomous system.

[0026] It should be noted that, although the system for shopping centers, airports and homes is intended for people with disabilities or with reduced mobility, it can also be used by anyone. Regarding the hardware (see Fig. 2), the system consists of two main units: a Control and Acquisition Unit and a Decision Unit.

[0027] The Control and Acquisition Unit is the hardware and software module that transmits commands to the motors and reads the distance sensor values. It communicates with the Decision Unit via a data connection, e.g. Ethernet or USB. Most motion commands are received from the Decision Unit. However, in situations of great proximity to obstacles, this module (Control and Acquisition Unit) has the ability to order an immediate stop or to recalculate the trajectory. It subsequently transmits the sensor information to the Decision Unit.

[0028] The Decision Unit is the software module connected to the 3D depth camera. This camera provides the identification of the current user and, based on the user's position in relation to the system, allows the decision module to calculate movement or stop commands.
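The division of responsibilities between these two units can be illustrated with a minimal sketch: the Decision Unit proposes motor commands, and the Control and Acquisition Unit passes them through unless a sensor reports an obstacle inside a safety threshold, in which case it forces an immediate stop. The class and method names, thresholds and normalized speed range below are illustrative assumptions, not the actual firmware interfaces.

```python
# Minimal sketch of the two-unit split described in [0026]-[0028].
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MotorCommand:
    left: float   # normalized speed, -1.0 .. 1.0
    right: float  # normalized speed, -1.0 .. 1.0

class ControlAndAcquisitionUnit:
    """Forwards motor commands, overriding them with an immediate stop
    when a distance sensor reports an obstacle closer than a safety threshold."""

    def __init__(self, emergency_distance_m: float = 0.15):
        self.emergency_distance_m = emergency_distance_m

    def read_sensors(self) -> list[float]:
        # Placeholder: in the real device this would read the distance
        # sensors over the serial/USB link to the sensor control unit.
        return [1.0, 1.0, 1.0, 1.0]

    def apply(self, command: MotorCommand) -> MotorCommand:
        distances = self.read_sensors()
        if min(distances) < self.emergency_distance_m:
            return MotorCommand(0.0, 0.0)  # local override: immediate stop
        return command  # otherwise pass the Decision Unit command through

class DecisionUnit:
    """Turns the tracked person's position errors into a motor command (simplified)."""

    def decide(self, lateral_error: float, distance_error: float) -> MotorCommand:
        forward = max(-1.0, min(1.0, distance_error))
        turn = max(-1.0, min(1.0, lateral_error))
        return MotorCommand(left=forward + turn, right=forward - turn)
```

In this sketch every command from the Decision Unit is filtered through the Control and Acquisition Unit, mirroring the priority described above: proximity readings can force a stop regardless of the tracking output.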
[0029] In relation to the software, the capture of scenes (see Figure 3) allows the system to identify objects and people. Using the 3D depth camera, the system can obtain the distances of these objects. The 3D depth camera has the ability to detect a human shape. This is accomplished by projecting an infrared matrix from inside the camera. The depth sensor reads the location of the infrared points in the matrix and calculates the distances between them. The algorithms integrated in the 3D depth camera libraries make it possible to recognize the shape of a person from these data. Biometric data can be calculated using variables such as height and the length of arms and legs, basically the entire human physiognomy.

[0030] The identification of objects is performed by means of the same depth camera, adding the features of an RGB camera, which provides greater reliability of the data related to objects, since color is added to the shape.

[0031] Regarding the recognition of the person, as soon as the system is turned on and ready to be used, it depends on the interaction with people. As explained above, the depth camera has the purpose of identifying a human shape and face. Through gestures, position, voice or personal characteristics (body structure and/or facial recognition), a virtual link is created between the system and the person recognized by the depth camera. As long as the person remains within the viewing angle of the depth camera, the system will not lose sight of the user.

[0032] Another alternative recognition system implemented is based on the use of bar codes or two-dimensional codes (QR codes) that the person shows in front of the camera; using the appropriate algorithm, the person is associated with the system. This option is aimed at people with limited physical movement who are unable to make the pre-configured gesture.

[0033] Another embodiment refers to the use of voice commands or facial recognition. The recognition and analysis procedure is the same as that used for recognizing people.

[0034] Calibration consists of adjusting the camera in relation to people, considering that physical characteristics vary greatly from one individual to another. The depth camera features a motor that allows adjustment of the angle in the horizontal plane. When the first identification occurs, the camera can be adjusted to ensure that the person is framed in the region of the image received by the depth camera.

[0035] Regarding operation, once the system recognizes the person, it tracks and follows the user. In this process, the algorithm receives the data collected by the depth camera and processes it in order to identify the person's movements.

[0036] The depth camera collects data from the person being followed. This data structure contains information related to the user's distance and position. Based on these distances and relative positions, differences can be observed over time during user tracking, which allow the system to follow the person.
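A minimal sketch of how such a virtual link can be kept from frame to frame is given below. The skeleton fields and the raised-hand engagement gesture are simplifications standing in for the recognition mechanisms described above (gesture, face, voice, QR or bar code); all names are assumptions rather than the actual implementation.

```python
# Minimal sketch of keeping a "virtual link" with one user between camera frames,
# in the spirit of paragraphs [0031]-[0036]. Field names are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Skeleton:
    user_id: int
    x: float           # lateral position relative to the camera (metres)
    z: float           # distance to the camera (metres)
    raised_hand: bool  # stand-in for the predefined engagement gesture

class UserTracker:
    def __init__(self):
        self.locked_id: Optional[int] = None

    def update(self, skeletons: list[Skeleton]) -> Optional[Skeleton]:
        # Not locked yet: try to lock onto a user making the engagement gesture.
        if self.locked_id is None:
            for s in skeletons:
                if s.raised_hand:
                    self.locked_id = s.user_id
                    return s
            return None
        # Already locked: keep following the same user ID if it is still visible.
        for s in skeletons:
            if s.user_id == self.locked_id:
                return s
        # The locked user is no longer returned by the camera: consider them lost.
        self.locked_id = None
        return None
```

Whichever recognition mechanism is used, the outcome is the same: a single user ID to which the tracker stays bound until that skeleton disappears from the camera's field of view.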
[0037] Therefore, the system can identify where the person is going (forward, backward, sideways, rotating, etc.). The system keeps the person in its center of vision at a fixed distance that is displayed and programmable. When the person moves sideways and leaves the system's center of vision, the system calculates the error (the distance from the person to the center of vision) and calculates the force to be applied to the left and/or right motor in order to readjust and bring the person back to the center of vision, depending on whether the person moves left or right.

[0038] In relation to the forward and backward speed, based on the depth camera, the system identifies how far the person is from the system, always maintaining the same distance. When the user moves, the error is calculated (the user's position relative to the ideal distance separating the system from the user) and commands are issued to correct this distance. This process allows the forward and backward movement of the system to maintain the ideal distance from the person, always keeping a constant alignment with them. By combining these data, the system follows the person in all possible directions.

[0039] The diagram in Figure 4 shows the operation described above.

[0040] With regard to the system's artificial intelligence, this document describes a decision support system (see Figure 5) based on heuristics and artificial intelligence algorithms, which assist in specific tasks such as:
- Facial recognition;
- Voice recognition;
- Curvature and trajectories;
- Braking;
- Escape routes;
- Acceleration;
- Battery management;
- Ability to react in unexpected situations;
- Ability to react to errors;
- Ability to avoid obstacles;
- Improved recognition of people and objects.

[0041] In relation to safety systems, the device also comprises distance sensors intended to avoid collisions with the surrounding environment. The sensors emit ultrasound signals and, depending on the time a signal takes to reach an obstacle and return to the sensor, calculate the distance to the object. Based on this information, the system checks whether there are any obstacles on the way, observing a specified safety distance at which the system must stop or create a curved path around the object to avoid collision. These distances have millimeter accuracy according to a preferred embodiment.

[0042] At the slightest doubt or sign that the system is not following the operating mode configured in the algorithm, the system is forced to stop by default.

[0043] According to a preferred embodiment, a set of audible warnings and notifications written on the screen is defined to indicate the correct functioning of the system. These range from the battery charge level warning to notifications indicating that the system has lost the person or that one of the components is malfunctioning or broken, among others.

[0044] One of the main components of the system is the 3D depth camera. Different cameras can be considered for use in the system, such as Microsoft's Kinect Sensor, Asus Xtion Pro Sensor, PrimeSense Capri or any other device with the same technical specifications and the same operation.

[0045] The system can be developed using the official Microsoft SDK or the open-source OpenNI SDK, which detects the hardware of any 3D depth camera, allowing the system to be developed regardless of the camera used.

[0046] In short, any depth camera capable of capturing a scene in real time and enabling segmentation and recognition of the human form can be used.
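For reference, the time-of-flight distance computation performed by the ultrasonic sensors of paragraph [0041] can be written out directly; the speed of sound and the safety distance used below are illustrative values, not parameters disclosed by this document.

```python
# Time-of-flight distance estimate for an ultrasonic sensor, as described in [0041].
# The 343 m/s speed of sound and the 0.30 m safety distance are illustrative values.

SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance(echo_time_s: float) -> float:
    # The pulse travels to the obstacle and back, so halve the round-trip distance.
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

def within_safety_range(echo_time_s: float, safety_distance_m: float = 0.30) -> bool:
    return echo_to_distance(echo_time_s) < safety_distance_m

# Example: a 2 ms round trip corresponds to about 0.34 m,
# which lies just outside a 0.30 m safety range.
```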
[0047] The following embodiments, although more specific, are also applicable to the entire disclosure.

[0048] Figure 6 describes the general operation of the algorithm. First, the 3D depth camera collects a skeleton image (1), which consists of a set of skeleton data structures. A skeleton is a data structure that contains information about the user's joints and bones, as well as the user's overall position. When skeletons are available, these data are sent to the User Detection and Recovery Algorithm (2), which returns either a user to be followed or none. If there is no user to be followed, the system does nothing and waits for the next cycle. If there is a user to be followed, the system feeds the skeleton of the tracked user to the Active Tracking Algorithm (3). This procedure returns the final speed to be sent to the motors through the communication module of the Motor Control Unit (4), and the cycle is then repeated.

[0049] The User Detection and Recovery Algorithm (Fig. 7) describes how the system creates a virtual connection with the user's skeleton. Once a collection of skeletons has been gathered, the system checks whether it is dealing with a new user or not, i.e., whether it was already following someone. If it is not following someone, it tries to lock onto a user by voice, facial, gesture or posture detection. This procedure consists of locking onto the user and returning a skeleton and a skeleton ID to track.

[0050] When the system is already following a user, it checks whether the skeleton collection contains the same tracking ID as the previously tracked user. When the same user ID is found, the skeleton information for this user is updated and returned. When the skeleton collection does not contain the currently tracked ID, the system assumes it has lost the user.

[0051] The Active Tracking Algorithm (Fig. 8) accepts a user skeleton and analyzes its global position (1). X defines how far the user is to the side. Z defines the user's distance to the device, as explained in figure 10. Based on this information, two different PID controllers are fed with these values, one for each axis. The PIDs return two errors based on predefined set points. The set points are the optimal position the user should be in, and are configured in the PIDs. These two errors define whether the system moves forward, backward, left or right. From these errors (2) (3), speeds are calculated (4) (5). When these errors are greater than a predefined limit, the different speeds are combined to form a final speed (7), and the value is returned. At the same time, the Obstacle Detection and Avoidance Algorithm returns a speed modifier to be multiplied into the calculation of the final speed (7). A speed modifier is a value between 0 and 1 that represents the percentage of the speed to be applied, with 1 being the maximum speed and 0 being the minimum speed.

[0052] This procedure acts as a braking system that reduces the speed of the wheels when approaching an obstacle.

[0053] The speed modifier is a value between 0.0 and 1.0 that may or may not decrease the motor speed, whether it results from obstacle avoidance or from obstacle detection.
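A minimal sketch of this active tracking step is given below, assuming a textbook PID controller; the gains, the 1.5 m distance set point and the sign conventions are illustrative assumptions, not values taken from the actual device.

```python
# Minimal sketch of the active tracking step (Figure 8): one PID per axis,
# combined into left/right wheel speeds and scaled by the obstacle speed modifier.
# Gains, set points and sign conventions are illustrative assumptions; dt > 0.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, set_point: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measurement: float, dt: float) -> float:
        error = self.set_point - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

lateral_pid = PID(kp=1.0, ki=0.0, kd=0.1, set_point=0.0)   # keep the user centred (x = 0)
distance_pid = PID(kp=0.8, ki=0.0, kd=0.1, set_point=1.5)  # keep the user ~1.5 m ahead (z)

def tracking_speeds(x: float, z: float, dt: float, speed_modifier: float) -> tuple[float, float]:
    # Convention: x > 0 means the user drifted to the right of the camera centre.
    forward = -distance_pid.step(z, dt)   # user too far -> positive forward speed
    turn = -lateral_pid.step(x, dt)       # user to the right -> positive "steer right"
    # Steering right means the left wheel turns faster than the right wheel.
    left = (forward + turn) * speed_modifier
    right = (forward - turn) * speed_modifier
    return left, right
```

In this sketch the obstacle algorithm's speed modifier simply scales both wheel speeds, matching the 0 to 1 "percentage of speed" semantics described above.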
[0054] Figure 9 describes the Obstacle Detection and Avoidance Algorithm. This component reads information from the distance sensors (1), then analyzes and updates the data structure (2). Each sensor is checked to verify whether the measured distances fall within the predefined safety regions (3). When the sensors detect an obstacle in front of or behind the device (5), the system calculates a speed modifier for both motors (6), decreasing the speed of the device or stopping it. When the sensors detect an obstacle beside the device (4), the system calculates two different speed modifiers (7) to assign to each motor, resulting in a curved movement that allows the obstacle to be avoided.

[0055] Figure 10 represents a schematic view of the system and describes the actuation process of the device (2). The device (2) follows the user, continuously correcting its position in relation to the user so that the user remains in the Safe Region / Non-Operating Region.

[0056] The system has two actuation axes, the X axis (9) and the Z axis (5). The X axis represents the user's lateral position in relation to the device. The Z axis represents the user's distance from the device.

[0057] The system has limits, namely the minimum (4) and maximum (3) of the X axis and the minimum (8) and maximum (6) of the Z axis. These limits define when the system should act. When the user is within these limits, the device does not move. When the user is outside the limits, the device starts to move. When the user is beyond the maximum Z limit (6), the device follows the user. When the user is closer than the minimum limit, the device moves back.

[0058] Figure 11 describes how the system calculates the acceleration that each wheel needs in order to follow a user along a curved path.

[0059] Figure 11a describes a scenario in which the user (1) is to the right of the reference set point (5) of the device (2). In this scenario, the system needs to calculate the acceleration to be assigned to the left (4) and right (3) motors, based on the angle that the user defines in relation to the reference set point / Z axis (5).

[0060] In Figure 11b, the system calculates the angle that the user defines (6). It can then use the same curvature angle (7) to calculate the acceleration for the wheels, in order to create the outer curved path (9) and the inner curved path (8). These trajectories correct the angle (6) to zero, as shown in Figure 11c.

[0061] Figure 11d describes the speed calculation process for the two wheels. The final speed for both the left wheel (9) and the right wheel (8) is calculated as follows:
- Left wheel = (R + b/2) * Alpha
- Right wheel = (R - b/2) * Alpha
where R is the turning radius (11), b is the distance between wheels (10) and Alpha is the angle of rotation (7). The Instantaneous Curvature Center (ICC) (12) defines the point around which the device turns. When the turning radius is zero, the ICC lies at half the distance between the wheels.

[0062] The above embodiments can be combined. The claims that follow set out specific embodiments of the present invention.
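As a worked illustration of the wheel-speed calculation in [0061], the sketch below implements the two formulas directly; here Alpha is interpreted as the angular rate about the instantaneous curvature center, and the radius and wheel-base values are illustrative assumptions rather than parameters of the actual device.

```python
# Wheel speeds for a curved path, following the formulas in [0061]:
#   left  = (R + b/2) * Alpha
#   right = (R - b/2) * Alpha
# Alpha is interpreted here as the angular rate about the instantaneous
# curvature center (ICC); the numeric values below are illustrative.

def wheel_speeds(turning_radius: float, wheel_base: float, alpha: float) -> tuple[float, float]:
    left = (turning_radius + wheel_base / 2.0) * alpha
    right = (turning_radius - wheel_base / 2.0) * alpha
    return left, right

# Example: R = 1.0 m, b = 0.5 m, alpha = 0.8 rad/s
# -> left wheel 1.0 m/s, right wheel 0.6 m/s: the device curves towards the right.
left, right = wheel_speeds(1.0, 0.5, 0.8)

# Special case from [0061]: with a zero turning radius the ICC lies midway between
# the wheels, so they spin at equal speeds in opposite directions (turn in place).
spin_left, spin_right = wheel_speeds(0.0, 0.5, 0.8)  # -> (0.2, -0.2)
```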
Claims:
1. Autonomous system for following a person, characterized by the fact that it comprises:
a. a wheeled vehicle;
b. a 3D depth camera;
c. distance and/or touch sensors;
d. at least two motors, each to move one or more wheels respectively on each of the two sides, left side and right side in relation to the running direction of the aforementioned vehicle; and
e. a data processing module, comprising a decision unit configured so that:
based on the location of the person in space in relation to the aforementioned vehicle, the system calculates an error between the intended location of the person in space and the actual location of the person in space, and;
based on this error, the system changes the speed of each motor so that the vehicle moves to reduce the aforementioned error, moving the vehicle towards the position intended for the vehicle behind the person;
when the person's location error is such that the actual location of the person in space is to the right or left of the person's intended location in space, different speeds are assigned to the motors in order to make the vehicle follow a curved path, causing the vehicle to turn;
when the person's location error is such that the actual location of the person in space is in front of or behind the intended location of the person in space, the same speed values are assigned to the motors in order to make the system follow a linear path, making the vehicle follow a straight line.

2. System according to the preceding claim 1, characterized by the fact that it is additionally configured to stop the two motors when it detects the proximity and/or the touch of an object by means of the aforementioned distance and/or touch sensors.

3. System according to either of the preceding claims 1 or 2, characterized by the fact that it is additionally configured to stop both motors when the 3D camera detects that the distance between the system and the person to be followed is less than a predetermined safety limit.

4. System according to any one of the preceding claims 1, 2 or 3, characterized by the fact that it is additionally configured to, before following the person, recognize the aforementioned person by facial recognition, predefined gesture recognition, predefined RFID tag or predefined bar code.

5. System according to any one of the preceding claims 1, 2, 3 or 4, characterized by the fact that it is additionally configured to, before following the person, perform the following steps:
detect the presence of a human shape with the 3D depth camera;
recognize the person to be followed by predefined facial recognition, predefined gesture recognition, predefined RFID tag or predefined bar code;
assign a unique ID to the detected human shape, and;
activate the tracking of the recognized person.
Similar technologies:
Publication number | Publication date | Patent title
BR112015006203A2 | 2019-11-26 | Self-tracking system and its operation method
US9355368B2 | 2016-05-31 | Computer-based method and system for providing active and automatic personal assistance using a robotic device/platform
US9796093B2 | 2017-10-24 | Customer service robot and related systems and methods
US8983662B2 | 2015-03-17 | Robots comprising projectors for projecting images on identified projection surfaces
US10606275B2 | 2020-03-31 | System for accurate object detection with multiple sensors
CN108241844B | 2021-12-14 | Bus passenger flow statistical method and device and electronic equipment
CN104718507A | 2015-06-17 | Autonomous traveling device traveling-information generation device, method, and program, and autonomous traveling device
CN110103952A | 2019-08-09 | Assist method, equipment, medium and the system of vehicle drive
US20200411154A1 | 2020-12-31 | Artificial intelligence robot and method of controlling the same
Algabri et al. 2020 | Deep-learning-based indoor human following of mobile robot using color feature
KR20190077482A | 2019-07-03 | Method and system for adjusting the orientation of a virtual camera while the vehicle is turning
Manta et al. 2019 | Wheelchair control by head motion using a noncontact method in relation to the pacient
Mamun et al. 2018 | Autonomous bus boarding robotic wheelchair using bidirectional sensing systems
Liu et al. 2018 | Enabling autonomous navigation for affordable scooters
Koyama et al. 2016 | IR tag detection and tracking with omnidirectional camera using track-before-detect particle filter
WO2021046607A1 | 2021-03-18 | Object moving system
Pati et al. 2017 | Vision-based robot following using PID control
Satoh et al. 2009 | A secure and reliable next generation mobility—An intelligent electric wheelchair with a stereo omni-directional camera system—
Ishizuka et al. 2017 | Motion control of a powered wheelchair using eye gaze in unknown environments
US20190224850A1 | 2019-07-25 | Communicative Self-Guiding Automation
Miro et al. 2012 | Low-cost visual tracking with an intelligent wheelchair for innovative assistive care
EP3729243A1 | 2020-10-28 | User-wearable systems and methods to collect data and provide information
WO2018061616A1 | 2018-04-05 | Monitoring system
Ghandour et al. 2016 | Interactive collision avoidance system for indoor mobile robots based on human-robot interaction
Choi et al. 2017 | Design of self-localization based autonomous driving platform for an electric wheelchair
Patent family:
Publication number | Publication date
PL2898384T3 | 2020-05-18
WO2014045225A1 | 2014-03-27
CA2885630A1 | 2014-03-27
EP2898384B1 | 2019-10-16
PT2898384T | 2020-01-21
ES2766875T3 | 2020-06-15
US20150229906A1 | 2015-08-13
CA2885630C | 2020-09-29
EP2898384A1 | 2015-07-29
US9948917B2 | 2018-04-17
Citations:
Publication number | Application date | Publication date | Applicant | Patent title
DE19509320A1 | 1995-03-15 | 1996-09-19 | Technologietransfer Anstalt Te | Sequence control for a self-driving vehicle
WO2004039612A2 | 2002-10-29 | 2004-05-13 | Benjamin Sharon | Intelligent terrain-traversing vehicle
WO2005098729A2 | 2004-03-27 | 2005-10-20 | Harvey Koselka | Autonomous personal service robot
JP4316477B2 | 2004-11-18 | 2009-08-19 | パナソニック株式会社 | Tracking method of mobile robot
US20070045014A1 | 2005-08-29 | 2007-03-01 | Greig Mark E | Maneuverable motorized personally operated vehicle
WO2007034434A2 | 2005-09-26 | 2007-03-29 | Koninklijke Philips Electronics N.V. | Method and device for tracking a movement of an object or of a person
CA2837477C | 2005-10-14 | 2016-06-21 | Aethon, Inc. | Robotic ordering and delivery apparatuses, systems and methods
CN101449180A | 2006-05-17 | 2009-06-03 | 优尔影技术有限公司 | Robotic golf caddy
US20110026770A1 | 2009-07-31 | 2011-02-03 | Jonathan David Brookshire | Person Following Using Histograms of Oriented Gradients
EP2506106B1 | 2009-11-27 | 2019-03-20 | Toyota Jidosha Kabushiki Kaisha | Autonomous moving object and control method
US8213680B2 | 2010-03-19 | 2012-07-03 | Microsoft Corporation | Proxy training data for human body tracking
GB2502213A | 2010-12-30 | 2013-11-20 | Irobot Corp | Mobile Human Interface Robot
US9204823B2 | 2010-09-23 | 2015-12-08 | Stryker Corporation | Video monitoring system
WO2016142794A1 | 2015-03-06 | 2016-09-15 | Wal-Mart Stores, Inc | Item monitoring system and method
US9896315B2 | 2015-03-06 | 2018-02-20 | Wal-Mart Stores, Inc. | Systems, devices and methods of controlling motorized transport units in fulfilling product orders
US20180099846A1 | 2015-03-06 | 2018-04-12 | Wal-Mart Stores, Inc. | Method and apparatus for transporting a plurality of stacked motorized transport units
CA2936394A1 | 2015-07-17 | 2017-01-17 | Wal-Mart Stores, Inc. | Shopping facility assistance systems, devices, and methods to identify security and safety anomalies
US9691248B2 | 2015-11-30 | 2017-06-27 | International Business Machines Corporation | Transition to accessibility mode
CA2961938A1 | 2016-04-01 | 2017-10-01 | Wal-Mart Stores, Inc. | Systems and methods for moving pallets via unmanned motorized unit-guided forklifts
US10527423B1 | 2016-04-07 | 2020-01-07 | Luftronix, Inc. | Fusion of vision and depth sensors for navigation in complex environments
CN105825627B | 2016-05-23 | 2018-03-27 | 蔡俊豪 | A kind of perambulator theft preventing method and burglary-resisting system
CN106056633A | 2016-06-07 | 2016-10-26 | 速感科技(北京)有限公司 | Motion control method, device and system
US20180053304A1 | 2016-08-19 | 2018-02-22 | Korea Advanced Institute Of Science And Technology | Method and apparatus for detecting relative positions of cameras based on skeleton data
US10689021B2 | 2016-08-27 | 2020-06-23 | Anup S. Deshpande | Automatic load mover
DE102016221365A1 | 2016-10-28 | 2018-05-03 | Ford Global Technologies, Llc | Method and apparatus for automatically maneuvering a wheelchair relative to a vehicle
CN106778471B | 2016-11-17 | 2019-11-19 | 京东方科技集团股份有限公司 | Automatically track shopping cart
CN106681326B | 2017-01-04 | 2020-10-30 | 京东方科技集团股份有限公司 | Seat, method of controlling seat movement and movement control system for seat
JP2020507160A | 2017-01-20 | 2020-03-05 | フォロー インスピレーション,エセ.アー. | Autonomous robot system
CN109144043A | 2017-06-27 | 2019-01-04 | 金宝电子工业股份有限公司 | The method for tracking object
US20190041868A1 | 2017-08-03 | 2019-02-07 | Walmart Apollo, Llc | Autonomous Vehicle-Based Item Retrieval System and Method
DE102017216702A1 | 2017-09-21 | 2019-03-21 | Zf Friedrichshafen Ag | Control unit for the autonomous operation of a light motor vehicle
DE102017218498A1 | 2017-10-17 | 2019-04-18 | Zf Friedrichshafen Ag | Control unit for the autonomous operation of a light motor vehicle
US10818031B2 | 2017-11-22 | 2020-10-27 | Blynk Technology | Systems and methods of determining a location of a mobile container
JP6981241B2 | 2017-12-26 | 2021-12-15 | トヨタ自動車株式会社 | Vehicle
WO2020014991A1 | 2018-07-20 | 2020-01-23 | Lingdong Technology Co. Ltd | Smart self-driving systems with side follow and obstacle avoidance
US11174022B2 | 2018-09-17 | 2021-11-16 | International Business Machines Corporation | Smart device for personalized temperature control
CN110945450A | 2018-10-10 | 2020-03-31 | 灵动科技(北京)有限公司 | Human-computer interaction automatic guidance vehicle
CN110187639B | 2019-06-27 | 2021-05-11 | 吉林大学 | Trajectory planning control method based on parameter decision framework
Legal status:
2018-11-21 | B06F | Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]
2020-06-23 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-10-13 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
2021-11-16 | B06A | Patent application procedure suspended [chapter 6.1 patent gazette]
2022-03-08 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
Priority:
Application number | Application date | Patent title
PT10654212 | 2012-09-19 |
PCT/IB2013/058672 | WO2014045225A1 | 2012-09-19 | Self tracking system and its operation method